27 research outputs found

    Deep learning investigation for chess player attention prediction using eye-tracking and game data

    Get PDF
    This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model generates saliency maps that capture hierarchical and spatial features of the chessboard in order to predict the fixation probability of individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6,600 saliency maps associated with the corresponding chess piece configurations. This corpus is complemented with synthetically generated data from actual games gathered from an online chess platform. Experiments using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations, with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
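    The abstract describes the architecture only at a high level. As a reading aid, here is a minimal PyTorch sketch of a skip-layer encoder-decoder that maps a piece-configuration tensor to a fixation-probability map; the 12-plane board encoding, layer widths, and 8x8 output resolution are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (not the paper's code): skip-layer encoder-decoder for board saliency.
    import torch
    import torch.nn as nn

    class ChessSaliencyNet(nn.Module):
        def __init__(self, in_planes=12):  # assumed: 6 piece types x 2 colours
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(in_planes, 32, 3, padding=1), nn.ReLU())
            self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 8x8 -> 4x4
            self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU())      # 4x4 -> 8x8
            self.head = nn.Conv2d(64, 1, 1)  # fuses fine (skip) and coarse (upsampled) features

        def forward(self, board):
            fine = self.enc1(board)                              # local piece relations
            coarse = self.enc2(fine)                             # longer-range board structure
            fused = torch.cat([self.up(coarse), fine], dim=1)    # skip connection
            logits = self.head(fused)
            b = logits.shape[0]
            return torch.softmax(logits.view(b, -1), dim=1).view(b, 1, 8, 8)

    saliency = ChessSaliencyNet()(torch.rand(4, 12, 8, 8))  # each 8x8 map sums to 1

    Training such a model against the recorded fixation maps would then minimize a divergence (for example KL) between predicted and ground-truth saliency.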

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    Get PDF
    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy in determining a player's chess expertise, whereas a unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
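    The abstract reports the accuracies but not the classifier itself. A plausible late-fusion baseline, sketched below with scikit-learn under stated assumptions (synthetic feature vectors, an RBF-SVM), simply concatenates per-modality features before classification so that the multimodal and unimodal conditions can be compared with the same model.

    # Illustrative multimodal fusion (not the authors' pipeline): concatenate
    # per-modality feature vectors and compare against a single-modality baseline.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 20                                   # synthetic stand-in for recorded trials
    posture = rng.normal(size=(n, 6))        # e.g. lean/posture statistics
    gaze    = rng.normal(size=(n, 10))       # e.g. fixation and scan-path statistics
    emotion = rng.normal(size=(n, 7))        # e.g. facial-expression intensities
    labels  = np.array([0] * 10 + [1] * 10)  # 0 = novice, 1 = expert

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("gaze only :", cross_val_score(clf, gaze, labels, cv=5).mean())
    print("multimodal:", cross_val_score(clf, np.hstack([posture, gaze, emotion]), labels, cv=5).mean())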

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Get PDF
    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care [1] or hospitalization [2-4] after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence implicating multiple genes in critical disease, including reduced expression of a membrane flippase (ATP11A) and increased expression of a mucin (MUC1). Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
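    As a reading aid only: the Mendelian randomization evidence cited above is typically obtained by combining per-variant Wald ratios into a fixed-effect inverse-variance-weighted (IVW) estimate. The snippet below shows that calculation on invented numbers; it is not the study's data or code.

    # Fixed-effect IVW Mendelian randomization on toy numbers (not GenOMICC data).
    # beta_x: variant effects on the exposure (e.g. gene expression or protein level)
    # beta_y: variant effects on the outcome (critical COVID-19, log-odds), se_y: their SEs
    import numpy as np

    beta_x = np.array([0.12, 0.08, 0.15])
    beta_y = np.array([0.06, 0.05, 0.09])
    se_y   = np.array([0.02, 0.03, 0.02])

    w = beta_x**2 / se_y**2                               # inverse-variance weights
    beta_ivw = np.sum(w * (beta_y / beta_x)) / np.sum(w)  # pooled causal-effect estimate
    se_ivw = np.sqrt(1.0 / np.sum(w))
    print(f"IVW estimate: {beta_ivw:.3f} (SE {se_ivw:.3f})")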

    Estimating Expertise from Eye Gaze and Emotions

    No full text
    In this thesis, we are concerned with enabling technologies for collaborative intelligent systems. Effective collaboration requires that both the human and the computer share an understanding of their respective roles and abilities. In particular, it requires an ability to monitor the intentions and awareness of the partner in order to determine appropriate actions and behaviors. Cognitive science has much to offer in such an effort. In recent decades, researchers in cognitive science have developed theories and models that describe human abilities for attention, awareness, understanding, and problem-solving. In this thesis, we explore how such theories can inform informatics to enable technologies for Collaborative Artificial Intelligence. In particular, we use observations of humans with different levels of expertise engaged in solving classic chess problems to explore the effectiveness of models for visual attention, awareness, understanding, and problem-solving. We have constructed an instrument for capturing and interpreting multimodal signals of humans engaged in solving problems, using off-the-shelf, commercially available components combined with in-house software. Our instrument makes it possible to record body posture, gestures, facial expressions, pupil dilation, eye scan-paths, and fixations, as well as player interactions with the chess problem. When combined with self-reports, these recordings make it possible to construct computer models for awareness and understanding of the game situation during problem-solving, using concepts and models from the cognitive science literature. As a first experiment, chess players were recorded while engaged in problems of increasing difficulty. These recordings were used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to threats and opportunities. Analysis of the recordings demonstrates how eye-gaze, body posture, and emotional features can be used to capture and model situation awareness. This experiment validated the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction involving problem-solving, and suggested possible improvements for future experiments. These initial experiments revealed an unexpected observation of rapid changes in emotion as players attempt to solve challenging problems. Attempts to explain this observation led us to explore the role of emotion in reasoning during problem-solving. In the second part of the thesis, we review the literature on emotion and propose a cognitive model that describes how emotions influence the process by which subjects select chunks (concepts) for use in the interpretation of a game situation. In particular, it is well known that problem-solving is strongly constrained by limits on the number of phenomena that can be considered at a time. To overcome this limit, human experts rely on abstraction to form new concepts (chunks) from emotionally salient phenomena. Our experiments indicate that emotion plays an important role, not only in the formation of concepts but also in the selection of concepts to use in reasoning. We hypothesize that expert players retain associations of concepts with emotions in long-term memory and use these to guide the selection of concepts for reasoning. This view is in accordance with Damasio's Somatic Marker hypothesis (1991), which posits that emotions guide behavior, particularly when cognitive processes are overloaded. We present initial results from a follow-on experiment designed to explore the fidelity of our model and to search for evidence of the role of emotion in solving problems. Our model suggests that an association of emotions with recognized situations guides experts in their selection of partial game configurations for use in exploring the game tree.
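    The chunk-selection model is described here only verbally. Purely as a toy illustration of the hypothesis that stored concept-emotion associations prioritize which chunks are considered under a limited working-memory capacity, here is a sketch in Python; the data structures, weights, and capacity limit are all hypothetical.

    # Hypothetical toy model (not the thesis implementation): recognized chunks carry an
    # emotional-salience tag from long-term memory; under a working-memory limit, the most
    # emotionally salient and relevant chunks are examined first when exploring the game tree.
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        name: str          # recognized partial configuration
        relevance: float   # perceptual match to the current position, 0..1
        arousal: float     # stored emotional intensity of past outcomes, 0..1

    WORKING_MEMORY_LIMIT = 4   # assumed capacity on simultaneously considered chunks

    def select_chunks(candidates):
        # Emotion acts like a somatic marker: it scales relevance rather than replacing it.
        ranked = sorted(candidates,
                        key=lambda c: c.relevance * (0.5 + 0.5 * c.arousal),
                        reverse=True)
        return ranked[:WORKING_MEMORY_LIMIT]

    board_chunks = [
        Chunk("back-rank weakness", 0.7, 0.9),
        Chunk("pinned knight",      0.8, 0.3),
        Chunk("open h-file",        0.5, 0.8),
        Chunk("isolated pawn",      0.6, 0.1),
        Chunk("exposed king",       0.4, 0.95),
    ]
    print([c.name for c in select_chunks(board_chunks)])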

    Estimation du niveau d'expertise à partir du regard et des émotions (Estimating Expertise from Eye Gaze and Emotions)

    No full text
    In this thesis, we are concerned with enabling technologies for collaborative intelligent systems. Effective collaboration requires that both the human and the computer share an understanding of their respective roles and abilities. In particular, it requires an ability to monitor the intentions and awareness of the partner in order to determine appropriate actions and behaviors. Cognitive science has much to offer in such an effort. In recent decades, researchers in cognitive science have developed theories and models that describe human abilities for attention, awareness, understanding, and problem-solving. In this thesis, we explore how such theories can inform informatics to enable technologies for Collaborative Artificial Intelligence. In particular, we use observations of humans with different levels of expertise engaged in solving classic chess problems to explore the effectiveness of models for visual attention, awareness, understanding, and problem-solving. We have constructed an instrument for capturing and interpreting multimodal signals of humans engaged in solving problems, using off-the-shelf, commercially available components combined with in-house software. Our instrument makes it possible to record body posture, gestures, facial expressions, pupil dilation, eye scan-paths, and fixations, as well as player interactions with the chess problem. When combined with self-reports, these recordings make it possible to construct computer models for awareness and understanding of the game situation during problem-solving, using concepts and models from the cognitive science literature. As a first experiment, chess players were recorded while engaged in problems of increasing difficulty. These recordings were used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to threats and opportunities. Analysis of the recordings demonstrates how eye-gaze, body posture, and emotional features can be used to capture and model situation awareness. This experiment validated the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction involving problem-solving, and suggested possible improvements for future experiments. These initial experiments revealed an unexpected observation of rapid changes in emotion as players attempt to solve challenging problems. Attempts to explain this observation led us to explore the role of emotion in reasoning during problem-solving. In the second part of the thesis, we review the literature on emotion and propose a cognitive model that describes how emotions influence the process by which subjects select chunks (concepts) for use in the interpretation of a game situation. In particular, it is well known that problem-solving is strongly constrained by limits on the number of phenomena that can be considered at a time. To overcome this limit, human experts rely on abstraction to form new concepts (chunks) from emotionally salient phenomena. Our experiments indicate that emotion plays an important role, not only in the formation of concepts but also in the selection of concepts to use in reasoning. We hypothesize that expert players retain associations of concepts with emotions in long-term memory and use these to guide the selection of concepts for reasoning. This view is in accordance with Damasio's Somatic Marker hypothesis (1991), which posits that emotions guide behavior, particularly when cognitive processes are overloaded. We present initial results from a follow-on experiment designed to explore the fidelity of our model and to search for evidence of the role of emotion in solving problems. Our model suggests that an association of emotions with recognized situations guides experts in their selection of partial game configurations for use in exploring the game tree.

    Multimodal Observation and Classification of People Engaged in Problem Solving: Application to Chess Players

    Get PDF
    In this paper we present the first results of a pilot experiment in the interpretation of multimodal observations of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. Domains of application for such cognitive model-based systems include, for instance, healthy autonomous ageing and automated training systems; the ability to observe cognitive states and emotional reactions can allow artificial systems to provide appropriate assistance in such contexts. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to challenging situations. Feature selection was performed to construct a multimodal classifier relying on the most relevant features from each modality. Initial results indicate that eye-gaze, body posture and emotion are good features for capturing such awareness. This experiment also validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
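    The abstract states only that feature selection was used to keep the most relevant features from each modality. One common way to realize this, sketched below with scikit-learn as an assumption rather than the authors' actual toolchain, is a univariate SelectKBest filter applied within each modality before fusing the surviving features.

    # Sketch (assumed, not the paper's pipeline): per-modality univariate feature selection,
    # then fusion of the retained features into one classifier. For brevity the selector is
    # fit on all data; in practice it belongs inside the cross-validation folds.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    y = np.array([0] * 12 + [1] * 12)           # e.g. lower vs higher expertise
    modalities = {                               # synthetic stand-in feature matrices
        "gaze":    rng.normal(size=(24, 20)),
        "posture": rng.normal(size=(24, 12)),
        "emotion": rng.normal(size=(24, 8)),
    }

    kept = [SelectKBest(f_classif, k=4).fit_transform(X, y) for X in modalities.values()]
    fused = np.hstack(kept)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print("fused CV accuracy:", cross_val_score(clf, fused, y, cv=4).mean())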

    The Role of Emotion in Problem Solving: First Results from Observing Chess

    Get PDF
    In this paper we present results from recent experiments suggesting that chess players associate emotions with game situations and reactively use these associations to guide search for planning and problem solving. We describe the design of an instrument for capturing and interpreting multimodal signals of humans engaged in solving challenging problems. We review results from a pilot experiment with human experts engaged in solving challenging chess problems, which revealed unexpected rapid changes in emotion as players attempted to solve them. We propose a cognitive model that describes the process by which subjects select chess chunks for use in interpretation of the game situation, and describe initial results from a second experiment designed to test this model.

    Video Lecture Design and Student Engagement: Analysis of Visual Attention, Affect, Satisfaction, and Learning Outcomes

    No full text
    The growing availability of online multimedia instruction, such as Massive Open Online Courses (MOOCs), marks a revolutionary new phase in the use of technology for education. Considering the high student attrition in MOOCs, it is crucial to study how students engage and disengage during their learning experience in relation to video lecture design. The present study conducted a pilot user experiment (n = 24) to evaluate, in a multimodal way, which video lecture design is more effective for learning. Two video lecture designs were scrutinized: voice over slides, and slides overlaid with a picture-in-picture instructor video. The experimental setup included different tracking technologies and sensory modalities to gather synchronized data from the learning experience: an eye-tracker, a Kinect, a frontal camera, and screen recording. Among the measures, eye-gaze observational data, facial expressions, and self-reported perceptions were analyzed and compared against the learning assessment results. Based on these results, engagement under the two video lecture designs is discussed by connecting the observational and self-reported data to the short-term learning outcomes.
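    The statistical analysis is not specified in the abstract. A minimal sketch of the kind of comparison implied (short-term learning scores under the two lecture designs, with the 24 participants split across conditions) is shown below using SciPy; the scores are invented and the study's actual tests may differ (for example, within-subject or non-parametric analyses).

    # Toy comparison of learning scores for the two lecture designs (invented data).
    import numpy as np
    from scipy import stats

    voice_over_slides = np.array([7.5, 6.0, 8.0, 5.5, 7.0, 6.5, 8.5, 7.0, 6.0, 7.5, 8.0, 6.5])
    picture_in_picture = np.array([6.5, 7.0, 7.5, 8.0, 6.0, 7.5, 8.5, 7.0, 7.5, 6.5, 8.0, 7.0])

    t, p = stats.ttest_ind(voice_over_slides, picture_in_picture, equal_var=False)  # Welch's t-test
    diff = voice_over_slides.mean() - picture_in_picture.mean()
    print(f"mean difference = {diff:+.2f}, t = {t:.2f}, p = {p:.3f}")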